
    An explainable convolutional autoencoder model for unsupervised change detection

    Transfer learning methods reuse a deep learning model developed for one task on another task. Such methods have been remarkably successful in a wide range of image processing applications. Following this trend, a few transfer learning based methods have been proposed for unsupervised multi-temporal image analysis and change detection (CD). In spite of their success, transfer learning based CD methods suffer from limited explainability. In this paper, we propose an explainable convolutional autoencoder model for CD. The model is trained: 1) in an unsupervised way, using as the bi-temporal images patches extracted from the same geographic location; 2) in a greedy fashion, one encoder and decoder layer pair at a time. A number of features relevant for CD are chosen from the encoder layer. To build an explainable model, only the selected features from the encoder layer are retained and the rest are discarded. Another encoder and decoder layer pair is then added to the model in a similar fashion, and the process repeats until convergence. We further visualize the features to better interpret what the model has learned. We validate the proposed method on a Landsat-8 dataset acquired over Spain. Through a set of experiments, we demonstrate the explainability and effectiveness of the proposed model.
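    The greedy layer-pair training and feature selection described in the abstract can be sketched in heavily simplified form. This is not the authors' code: the linear encoder/decoder, the gradient-descent loop, and the variance-based relevance criterion are illustrative stand-ins (the paper uses convolutional layers and its own CD-relevance measure).

```python
import numpy as np

rng = np.random.default_rng(0)

def train_layer_pair(X, k, lr=0.01, steps=400):
    """Fit one linear encoder/decoder pair to patch vectors X (n, d)
    by gradient descent on the reconstruction MSE."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, k))    # encoder weights
    V = rng.normal(scale=0.1, size=(k, d))    # decoder weights
    losses = []
    for _ in range(steps):
        Z = X @ W                  # encoded features
        E = Z @ V - X              # reconstruction error
        losses.append(float(np.mean(E ** 2)))
        gW = (2 / n) * X.T @ (E @ V.T)   # dL/dW
        gV = (2 / n) * Z.T @ E           # dL/dV
        W -= lr * gW
        V -= lr * gV
    return W, V, losses

def select_features(Z, keep):
    """Illustrative relevance criterion: keep the highest-variance features."""
    return np.argsort(Z.var(axis=0))[::-1][:keep]

X = rng.normal(size=(200, 16))     # toy stand-ins for image patches
W, V, losses = train_layer_pair(X, k=8)
kept = select_features(X @ W, keep=4)   # retained features; rest discarded
```

    In the paper's scheme, the next encoder/decoder pair would then be trained greedily on top of the retained features only.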

    Self-supervised Multisensor Change Detection

    Most change detection methods assume that pre-change and post-change images are acquired by the same sensor. However, in many real-life scenarios, e.g., natural disasters, it is more practical to use the latest available images before and after the incident, which may have been acquired by different sensors. In particular, we are interested in combining images acquired by optical and Synthetic Aperture Radar (SAR) sensors. SAR images appear vastly different from optical images even when capturing the same scene. In addition, change detection methods are often constrained to use only the target image pair, with no labeled data and no additional unlabeled data. Such constraints limit the scope of traditional supervised machine learning and unsupervised generative approaches for multi-sensor change detection. The recent rapid development of self-supervised learning has shown that some methods can work with only a few images. Motivated by this, we propose a method for multi-sensor change detection that uses only the unlabeled target bi-temporal images, training a network in a self-supervised fashion via deep clustering and contrastive learning. The proposed method is evaluated on four multi-modal bi-temporal scenes showing change, and the benefits of our self-supervised approach are demonstrated.
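    The contrastive component of such a self-supervised setup can be illustrated with a minimal InfoNCE-style loss. This sketch is not the paper's method: the embeddings, dimensions, and temperature are all hypothetical, and the paper additionally uses deep clustering; the point is only that correctly paired cross-sensor embeddings should score a lower loss than mismatched ones.

```python
import numpy as np

def info_nce(za, zb, tau=0.1):
    """InfoNCE-style contrastive loss between two embedding sets (n, d),
    where row i of za and row i of zb come from the same location."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                      # cross-view similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(logp)))         # diagonal = positive pairs

rng = np.random.default_rng(1)
opt = rng.normal(size=(32, 8))                    # stand-in "optical" embeddings
sar = opt + 0.05 * rng.normal(size=(32, 8))       # stand-in "SAR" embeddings
aligned = info_nce(opt, sar)                      # correct pairing: low loss
shuffled = info_nce(opt, np.roll(sar, 1, axis=0)) # broken pairing: high loss
```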

    Patch-level unsupervised planetary change detection

    Change detection (CD) is critical for analyzing data collected by planetary exploration missions, e.g., for the identification of new impact craters. However, CD is still a relatively new topic in the context of planetary exploration. The sheer variation of planetary data makes CD much more challenging than in Earth observation (EO). Unlike CD for EO, a patch-level decision is preferred in planetary exploration, as it is difficult to obtain perfect pixelwise alignment/co-registration between the bi-temporal planetary images. The lack of labeled bi-temporal data impedes supervised CD. To overcome these challenges, we propose an unsupervised CD method that exploits a pre-trained feature extractor to obtain bi-temporal deep features, which are processed with global max-pooling to obtain a patch-level feature description. The bi-temporal patch-level features are then compared by difference to determine whether a patch has changed. Additionally, a self-supervised method is proposed to estimate the decision boundary between changed and unchanged patches. Experimental results on three planetary CD datasets from two different planetary bodies (Mars and the Moon) demonstrate that the proposed method often outperforms supervised planetary CD methods. Code is available at https://gitlab.lrz.de/ai4eo/cd/-/tree/main/planetaryCDUnsup
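    The pooling-and-difference pipeline can be sketched as follows, assuming the deep features are given. This is illustrative only: the feature maps are random stand-ins for a pre-trained extractor's output, and the mean + k*std outlier rule is a simple substitute for the paper's self-supervised decision-boundary estimation.

```python
import numpy as np

def patch_descriptor(feat):
    """Global max-pooling over the spatial axes of a (C, H, W) feature map."""
    return feat.max(axis=(1, 2))

def detect_changes(feats_t1, feats_t2, k=2.0):
    """Flag patches whose bi-temporal descriptor distance is an outlier.
    The mean + k*std threshold stands in for the paper's self-supervised
    boundary between changed and unchanged patches."""
    d = np.array([np.linalg.norm(patch_descriptor(a) - patch_descriptor(b))
                  for a, b in zip(feats_t1, feats_t2)])
    return d > d.mean() + k * d.std(), d

rng = np.random.default_rng(2)
t1 = [rng.normal(size=(8, 4, 4)) for _ in range(20)]      # "pre" deep features
t2 = [f + 0.01 * rng.normal(size=f.shape) for f in t1]    # mostly unchanged
t2[5] = rng.normal(size=(8, 4, 4))                        # one changed patch
flags, dist = detect_changes(t1, t2)
```

    Because the decision is per patch, small misregistrations between the bi-temporal images do not need to be resolved at pixel level.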

    Out-of-distribution detection in satellite image classification

    In satellite image analysis, a distributional mismatch between the training and test data may arise for several reasons, including unseen classes in the test data and differences in geographic area. Deep learning based models may behave in an unexpected manner when subjected to test data with such distributional shifts from the training data, also called out-of-distribution (OOD) examples. Predictive uncertainty analysis is an emerging research topic that has not been explored much in the context of satellite image analysis. Towards this, we adopt a Dirichlet Prior Network based model to quantify the distributional uncertainty of deep learning models for remote sensing. The approach seeks to maximize the representation gap between in-domain and OOD examples for better identification of unknown examples at test time. Experimental results on three exemplary test scenarios show the efficacy of the model in satellite image analysis.
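    The core idea of a Dirichlet Prior Network, that the network outputs concentration parameters whose overall magnitude encodes distributional certainty, can be shown with two hand-picked outputs. The alpha values are hypothetical, and the entropy/precision measures here are generic uncertainty proxies rather than the article's exact scoring function.

```python
import numpy as np

def dirichlet_uncertainty(alpha):
    """Simple uncertainty measures from Dirichlet concentration parameters
    alpha (n, K): a low precision alpha_0 signals distributional
    uncertainty, i.e., a likely OOD input."""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum(axis=1)                    # precision
    p = alpha / a0[:, None]                   # expected class probabilities
    entropy = -(p * np.log(p)).sum(axis=1)    # total uncertainty
    return a0, entropy

in_dist = [[50.0, 1.0, 1.0]]   # confidently class 0: high precision
ood = [[1.0, 1.0, 1.0]]        # flat Dirichlet: low precision, high entropy
a0_in, h_in = dirichlet_uncertainty(in_dist)
a0_ood, h_ood = dirichlet_uncertainty(ood)
```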

    Towards Bridging the gap between Empirical and Certified Robustness against Adversarial Examples

    The current state-of-the-art defense methods against adversarial examples typically focus on improving either empirical or certified robustness. Among them, adversarially trained (AT) models produce state-of-the-art empirical defense against adversarial examples without providing any robustness guarantees for large classifiers or higher-dimensional inputs. In contrast, existing randomized smoothing based models achieve state-of-the-art certified robustness while significantly degrading the empirical robustness against adversarial examples. In this paper, we propose a novel method, called Certification through Adaptation, that transforms an AT model into a randomized smoothing classifier during inference to provide certified robustness for the ℓ2 norm without affecting its empirical robustness against adversarial attacks. We also propose an Auto-Noise technique that efficiently approximates the appropriate noise levels to flexibly certify the test examples using randomized smoothing. Our proposed Certification through Adaptation with the Auto-Noise technique achieves average certified radius (ACR) scores of up to 1.102 and 1.148 for the CIFAR-10 and ImageNet datasets, respectively, using AT models without affecting their empirical robustness or benign accuracy. Therefore, our paper is a step towards bridging the gap between empirical and certified robustness against adversarial examples by achieving both with the same classifier.
    Comment: An abridged version of this work has been presented at the ICLR 2021 Workshop on Security and Safety in Machine Learning Systems: https://aisecure-workshop.github.io/aml-iclr2021/papers/2.pd
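    The randomized smoothing machinery the paper builds on (Cohen et al.'s ℓ2 certificate) can be sketched with a toy base classifier. This is not the paper's Certification through Adaptation or Auto-Noise procedure: the classifier, noise level, and sample count are hypothetical, and a rigorous certificate would use a lower confidence bound on the top-class probability rather than the raw empirical estimate.

```python
import random
import statistics
from collections import Counter

def certify(classify, x, sigma=0.25, n=2000, seed=0):
    """Monte Carlo randomized smoothing: vote over Gaussian perturbations
    of x, then convert the top class's empirical probability pA into an
    l2 certified radius R = sigma * Phi^{-1}(pA)."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n):
        votes[classify([xi + rng.gauss(0.0, sigma) for xi in x])] += 1
    top, count = votes.most_common(1)[0]
    p_a = min(count / n, 1.0 - 1e-6)          # keep inv_cdf finite
    radius = max(sigma * statistics.NormalDist().inv_cdf(p_a), 0.0)
    return top, radius

# toy base classifier standing in for an adversarially trained network
clf = lambda v: int(v[0] > 0.0)
label, radius = certify(clf, [1.0, 0.0])
```

    The paper's contribution is to apply such a certification wrapper to an AT model only at inference time, so the model's empirical robustness is left untouched.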

    Multitarget Domain Adaptation for Remote Sensing Classification Using Graph Neural Network

    Remote sensing deals with huge variations in geography, acquisition season, and a plethora of sensors. Given the difficulty of collecting labeled data that uniformly represents all scenarios, data-hungry deep learning models are often trained with labeled data from a source domain that is limited in the above-mentioned aspects. Domain adaptation (DA) methods can adapt such a model for application to target domains whose distributions differ from the source domain. However, most remote sensing DA methods are designed for a single target, thus requiring a separate target classifier to be trained for each target domain. To mitigate this, we propose multitarget DA, in which a single classifier is learned for multiple unlabeled target domains. To build a multitarget classifier, it may be beneficial to effectively aggregate features from the labeled source and the different unlabeled target domains. Toward this, we exploit co-teaching based on a graph neural network that is capable of leveraging unlabeled data. We use a sequential adaptation strategy that adapts first on the easier target domains, assuming that the network finds it easiest to adapt to the closest target domain. We validate the proposed method on two different datasets, representing geographical and seasonal variation. Code is available at https://gitlab.lrz.de/ai4eo/da-multitarget-gnn/
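    The sequential "easiest target first" strategy presupposes some ordering of target domains by closeness to the source. One simple way to obtain such an ordering, shown below purely as an illustration (the abstract does not state the actual criterion used), is to compare mean feature embeddings.

```python
import numpy as np

def adaptation_order(source_feats, target_feats_list):
    """Rank target domains from closest to farthest from the source using
    the distance between mean feature embeddings (a crude domain-gap
    proxy; the paper's actual ordering criterion may differ)."""
    mu_s = source_feats.mean(axis=0)
    gaps = [float(np.linalg.norm(t.mean(axis=0) - mu_s))
            for t in target_feats_list]
    return np.argsort(gaps), gaps

rng = np.random.default_rng(3)
src = rng.normal(size=(100, 5))
far = rng.normal(loc=3.0, size=(100, 5))    # large domain shift
near = rng.normal(loc=0.2, size=(100, 5))   # small domain shift
order, gaps = adaptation_order(src, [far, near])  # adapt to `near` first
```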

    SemiSiROC: Semisupervised Change Detection With Optical Imagery and an Unsupervised Teacher Model

    Change detection (CD) is an important yet challenging task in remote sensing. In this article, we underline that combining unsupervised and supervised methods in a semisupervised framework improves CD performance. We rely on half-sibling regression for optical change detection (SiROC) as an unsupervised teacher model to generate pseudolabels (PLs) and select only the most confident PLs for pretraining different student models. Our results are robust across three different competitive student models, two semisupervised PL baselines, two benchmark datasets, and a variety of loss functions. While the performance gains are highest with a limited number of labels, a notable effect of PL pretraining persists when more labeled data are used. Further, we show that the confidence selection of SiROC is indeed effective and that the performance gains generalize to scenes that were not used for PL training. Through PL pretraining, SemiSiROC allows student models to learn more refined shapes of changes and makes them less sensitive to differences in acquisition conditions.
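    The confidence-based PL selection step can be sketched in a few lines. The thresholds and the per-pixel change probabilities below are hypothetical; the idea is simply that only the teacher's most confident change/no-change decisions become training targets, while ambiguous pixels are masked out.

```python
import numpy as np

def select_confident_pls(change_prob, lo=0.1, hi=0.9):
    """Keep only high-confidence teacher outputs as pseudolabels: label 1
    where P(change) >= hi, label 0 where P(change) <= lo, and mask out
    the ambiguous middle so it never enters student pretraining."""
    labels = (change_prob >= hi).astype(int)
    mask = (change_prob >= hi) | (change_prob <= lo)
    return labels, mask

probs = np.array([0.02, 0.45, 0.95, 0.60, 0.05, 0.99])  # toy teacher outputs
labels, mask = select_confident_pls(probs)
```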

    An Advanced Dirichlet Prior Network for Out-of-distribution Detection in Remote Sensing

    This article introduces a compressive sensing (CS)-based approach for increasing bistatic synthetic aperture radar (SAR) imaging quality in the context of a multiaperture acquisition. The analyzed data were recorded over an opportunistic bistatic setup comprising a stationary ground-based receiver (COBIS, opportunistic C-band bistatic SAR differential interferometry) and a Sentinel-1 C-band transmitter. Since the terrain observation by progressive scans (TOPS) mode is operated, the receiver can record synchronization pulses and echoed signals from the scene during many apertures. Hence, it is possible to improve the azimuth resolution by exploiting the multiaperture data. The recorded data are not contiguous, and a naive integration of the chopped azimuth phase history would generate undesired grating lobes. The proposed processing scheme exploits the natural sparsity of the illuminated scene. For azimuth profile recovery, greedy, convex, and nonconvex CS solvers are analyzed. The sparsifying basis/dictionary is constructed using a synthetically generated azimuth chirp derived from Sentinel-1 orbital parameters and the COBIS position. The chirp-based CS performance is further contrasted with a Fourier-based CS method and an autoregressive model for signal reconstruction, in terms of scene extent limitations and phase restoration efficiency. Furthermore, the analysis of different receiver-looking scenarios led to the insertion of a direct and an inverse Keystone transform for range cell migration (RCM) correction into the processing chain, to cope with squinted geometries. We provide an extensive set of simulated and real-world results that show the proposed workflow is efficient both in improving the azimuth resolution and in mitigating the sidelobes.
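    Of the solver families mentioned above, the greedy one is easiest to illustrate. The following orthogonal matching pursuit sketch recovers a sparse profile from an overcomplete dictionary; the random Gaussian dictionary and toy sparse signal are stand-ins for the article's chirp-based dictionary and azimuth profiles.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit all picked atoms by least
    squares. A minimal example of a greedy CS solver."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 80))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(80)
x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]    # sparse profile stand-in
y = A @ x_true                            # noiseless measurements
x_hat = omp(A, y, k=3)
```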